Applications of industrial robotic manipulators (e.g., cobots) can require efficient online motion planning in environments containing a combination of static and non-static obstacles. Existing general-purpose planning methods often produce poor-quality solutions when the available computation time is limited, or fail to produce a solution at all. We propose a new motion planning framework designed to operate in a user-defined task space, rather than the robot's workspace, which intentionally trades workspace generality for planning and execution time efficiency. Our framework automatically constructs trajectory libraries that are queried online, similar to previous methods that exploit offline computation. Importantly, our method also provides bounded suboptimality guarantees on trajectory length. The key idea is to establish approximate isometries known as $\epsilon$-Gromov-Hausdorff approximations, such that points that are close in task space are also close in configuration space. These bounding relations further imply that trajectories can be smoothly concatenated, which enables our framework to address batch query scenarios where the objective is to find a minimum-length sequence of trajectories that visits an unordered set of goals. We evaluate our framework in simulation with several kinematic configurations, including a manipulator mounted on a mobile base. Results demonstrate that our method achieves feasible real-time performance and suggest interesting opportunities for extending its capabilities.
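As a minimal illustration of the central definition, the following sketch checks whether a correspondence between two finite point samples is an $\epsilon$-Gromov-Hausdorff approximation: pairwise distances are distorted by at most $\epsilon$, and the image is an $\epsilon$-net of the target. The point sets, map `f`, and tolerance are illustrative assumptions, not the paper's implementation.

```python
import itertools
import math

def is_eps_gh_approx(X, Y, f, eps):
    """Check that the map f: X -> Y (a dict over points in X) distorts
    pairwise distances by at most eps and that every point of Y is
    within eps of the image f(X)."""
    dist = math.dist
    # Distance distortion: |d_X(x, x') - d_Y(f(x), f(x'))| <= eps
    for x1, x2 in itertools.combinations(X, 2):
        if abs(dist(x1, x2) - dist(f[x1], f[x2])) > eps:
            return False
    # eps-surjectivity: the image of f is an eps-net of Y
    return all(min(dist(y, f[x]) for x in X) <= eps for y in Y)
```

In the framework above, such a relation between task-space and configuration-space samples is what licenses the bounded-suboptimality and smooth-concatenation guarantees.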
translated by 谷歌翻译
Whereas dedicated scene representations are required for each different task in conventional robotic systems, this paper demonstrates that a unified representation can be used directly for multiple key tasks. We propose the Log-Gaussian Process Implicit Surface for Mapping, Odometry and Planning (Log-GPIS-MOP): a probabilistic framework for surface reconstruction, localisation and navigation based on a unified representation. Our framework applies a logarithmic transformation to a Gaussian Process Implicit Surface (GPIS) formulation to recover a global representation that accurately captures the Euclidean distance field with gradients, while simultaneously being an implicit surface. By directly estimating the distance field and its gradient through Log-GPIS inference, the proposed incremental odometry technique computes the optimal alignment of incoming frames and fuses them globally to produce a map. Meanwhile, an optimisation-based planner computes safe collision-free paths using the same Log-GPIS surface representation. We validate the proposed framework on simulated and real datasets in 2D and 3D, and benchmark it against state-of-the-art approaches. Our experiments show that Log-GPIS-MOP produces competitive results in sequential odometry, surface mapping and obstacle avoidance.
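The logarithmic transformation at the core of Log-GPIS can be illustrated in one dimension: the regressed field has the form $v(x) = e^{-\lambda d(x)}$, so the Euclidean distance and its gradient are recovered from the inferred mean via $d(x) \approx -\log v(x)/\lambda$. The whitening parameter, surface location, and 1-D setup below are illustrative assumptions; the actual framework performs full GP inference.

```python
import numpy as np

lam = 20.0                       # whitening parameter (illustrative)
surface = 0.3                    # a single surface point on the line
xs = np.linspace(0.0, 1.0, 101)

# "Observations" of the log-transformed field v(x) = exp(-lam * d(x));
# in Log-GPIS, v is what the Gaussian process actually regresses.
v = np.exp(-lam * np.abs(xs - surface))

# Recover the Euclidean distance field and its gradient from v.
d = -np.log(v) / lam
grad_d = np.gradient(d, xs)      # unit magnitude away from the surface
```

Away from the surface the recovered gradient has unit norm, which is exactly the property the odometry and planning modules exploit.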
Peripherally inserted central catheters (PICCs) have been widely used as one of the representative central venous lines (CVCs) owing to their long-term intravascular access with a low infection rate. However, PICC tip misplacement occurs frequently, increasing the risk of complications such as puncture, embolism, and cardiac arrhythmia. To detect the tip automatically and precisely, various attempts have been made using the latest deep learning (DL) techniques. Even with these methods, however, it remains difficult to determine the tip location in practice, because a multiple-fragments phenomenon (MFP) occurs in the process of predicting and extracting the PICC line that is required before predicting the tip. This study aimed to develop a system, generally applicable to existing models, that restores the PICC line more accurately by removing the MFs from the model output, thereby precisely localising the actual tip position for detecting its placement. To this end, we propose a multi-stage DL-based framework that post-processes the PICC line extraction results of existing techniques. Performance was compared in terms of the root-mean-square error (RMSE) and the MFP incidence rate, according to whether the proposed framework (MFCN) was applied to five conventional models. In internal validation, applying the MFCN to the existing single models improved the MFP by an average of 45%, and the RMSE improved by more than 63%, from an average of 26.85 mm (17.16 to 35.80 mm) to 9.72 mm (9.37 to 10.98 mm). In external validation, applying the MFCN decreased the MFP incidence by an average of 32% and the RMSE by an average of 65%. Therefore, by applying the proposed MFCN, we observed a significant and consistent improvement in PICC tip-localisation performance compared with the existing models.
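A much-simplified stand-in for this kind of fragment removal is to keep only the largest connected component of the predicted line mask before locating the tip. The mask, connectivity choice, and bottom-most-pixel tip rule below are illustrative assumptions, not the multi-stage MFCN itself.

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask
    (a crude stand-in for multiple-fragment removal)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    best = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        comp, queue = [], deque([(r, c)])
        seen[r, c] = True
        while queue:                      # BFS flood fill
            y, x = queue.popleft()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = True
    return out

def tip_position(mask):
    """Tip proxy: the bottom-most 'on' pixel as (row, col)."""
    rows, cols = np.nonzero(mask)
    i = int(np.argmax(rows))
    return int(rows[i]), int(cols[i])
```

Without such cleanup, a spurious fragment below the true line would be mistaken for the tip, which is exactly the failure mode the MFP describes.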
In the robotics and computer vision communities, extensive studies have been conducted on surveillance tasks, including human detection, tracking, and motion recognition with a camera. Additionally, deep learning algorithms are widely utilized in these tasks, as in other computer vision tasks. Existing public datasets are insufficient for developing learning-based methods that handle diverse surveillance tasks in outdoor and extreme situations, such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named the eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person-view data annotated by well-trained annotators. Moreover, a single pair contains multi-modal data (e.g., an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and present methods of exploiting our dataset with deep learning-based algorithms. The latest information on the dataset and our study is available at https://github.com/lge-robot-navi, and the dataset will be available for download through a server.
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Robotics has been widely applied in smart construction for generating digital twins and for autonomous inspection of construction sites. For example, thermal inspection during concrete curing requires continual monitoring of the concrete temperature to ensure concrete strength and to avoid cracks. However, buildings are typically too large to be monitored by installing fixed thermal cameras, and post-processing is required to compute the accumulated heat at each measurement point. An autonomous monitoring system capable of long-term thermal mapping at a large construction site can therefore provide both cost-effectiveness and a precise safety margin for the curing-period estimate. To this end, this study proposes a low-cost thermal mapping system consisting of a 2D range scanner attached to a consumer-level inertial measurement unit and a thermal camera, for automated heat monitoring in construction using mobile robots.
Flapping-fin unmanned underwater vehicle (UUV) propulsion systems provide high maneuverability for naval tasks such as surveillance and terrain exploration. Recent work has explored the use of time-series neural-network surrogate models to predict thrust from vehicle design and fin kinematics. We develop a search-based inverse model that leverages a kinematics-to-thrust neural-network model for control-system design. Our inverse model finds a set of fin kinematics with the multi-objective goal of reaching a target thrust while creating a smooth kinematics transition between flapping cycles. We demonstrate how a control system integrating this inverse model enables online, cycle-by-cycle adjustments that prioritize different system objectives.
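The search-based inversion can be read as a discrete optimisation over candidate fin kinematics, scoring each candidate by a weighted combination of thrust-tracking error and smoothness relative to the previous cycle. The surrogate `predict_thrust`, the candidate grids, and the weights below are all hypothetical placeholders for the paper's learned model.

```python
import itertools

def predict_thrust(freq, amp):
    """Hypothetical stand-in for the neural-network thrust surrogate."""
    return 0.5 * freq * amp

def invert(target_thrust, prev, w_thrust=1.0, w_smooth=0.2):
    """Pick the (frequency, amplitude) pair that best trades off thrust
    tracking against a smooth change from the previous cycle's kinematics."""
    freqs = [0.5 + 0.1 * i for i in range(16)]     # Hz, illustrative grid
    amps = [10 + 2 * i for i in range(16)]         # degrees, illustrative grid

    def cost(k):
        f, a = k
        smooth = abs(f - prev[0]) + 0.05 * abs(a - prev[1])
        return (w_thrust * abs(predict_thrust(f, a) - target_thrust)
                + w_smooth * smooth)

    return min(itertools.product(freqs, amps), key=cost)
```

Re-weighting `w_thrust` against `w_smooth` between cycles is what allows the online controller to prioritise different objectives on the fly.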
In this paper, we address the problem of monocular bokeh synthesis, where we attempt to render a shallow depth-of-field image from a single all-in-focus image. Unlike with DSLR cameras, this effect cannot be captured directly by mobile cameras due to the physical constraints of the mobile aperture. We therefore propose a network-based approach that is capable of rendering realistic monocular bokeh from a single image input. To this end, we introduce three new edge-aware bokeh losses based on a predicted monocular depth map, which sharpen foreground edges while blurring the background. The model is then finetuned with an adversarial loss to produce a photorealistic bokeh effect. Experimental results demonstrate that our approach produces a pleasing, natural bokeh effect with sharp edges while handling complicated scenes.
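At its crudest, depth-guided bokeh rendering can be approximated classically: blur the image, then composite with a depth-derived mask so pixels near the focal plane stay sharp. The box blur, threshold mask, and tiny grayscale image below are illustrative assumptions; they stand in for, and fall well short of, the learned edge-aware rendering described above.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur with edge padding (a crude bokeh stand-in)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def render_bokeh(img, depth, focus_depth, tol=0.1):
    """Keep pixels near the focal plane sharp; blur everything else."""
    sharp_mask = (np.abs(depth - focus_depth) <= tol).astype(float)
    return sharp_mask * img + (1.0 - sharp_mask) * box_blur(img)
```

The hard threshold here is precisely what produces halo artifacts at depth discontinuities, which motivates the edge-aware losses in the paper.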
Visual-inertial odometry and SLAM algorithms are widely used in various fields, such as service robots, drones, and autonomous vehicles. Most SLAM algorithms are based on the assumption that landmarks are static. In the real world, however, various dynamic objects exist, and they degrade the pose estimation accuracy. In addition, temporarily static objects, which are static during observation but move once out of view, trigger false loop closures. To overcome these problems, we propose a novel visual-inertial SLAM framework, called DynaVINS, which is robust against both dynamic objects and temporarily static objects. In our framework, we first present a robust bundle adjustment that can reject the features of dynamic objects by leveraging pose priors estimated by IMU preintegration. Then, a keyframe grouping method and a multiple-hypothesis-based constraints grouping method are proposed to reduce the effect of temporarily static objects in loop closing. Subsequently, we evaluated our method on public datasets containing numerous dynamic objects. Finally, the experimental results confirm that DynaVINS shows promising performance compared with other state-of-the-art methods by successfully rejecting the effects of dynamic and temporarily static objects. Our code is available at https://github.com/url-kaist/dynavins.
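The essence of such a robust bundle adjustment is to attach a weight $w_j \in [0, 1]$ to each residual and regularise the weights so that only features with large residuals against the IMU-derived pose prior (likely dynamic features) are down-weighted. For the simplified objective $\sum_j w_j^2 r_j^2 + \lambda (1 - w_j)^2$, the per-residual optimum has the closed form $w_j^* = \lambda / (\lambda + r_j^2)$. This is a generic half-quadratic sketch, not DynaVINS's exact formulation; $\lambda$ and the residuals are illustrative.

```python
def robust_weight(residual, lam=1.0):
    """Minimiser of w**2 * r**2 + lam * (1 - w)**2 over w, which gives
    w* = lam / (lam + r**2): small residuals keep w close to 1, while
    large residuals (likely dynamic features) are driven toward 0."""
    return lam / (lam + residual ** 2)
```

Setting the derivative $2 w r^2 - 2\lambda(1 - w) = 0$ and solving for $w$ recovers the closed form, so the weights can be updated in closed form inside each bundle-adjustment iteration.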